The understanding capabilities of current state-of-the-art 3D models are limited by datasets with a small number of annotated samples and a pre-defined set of categories. In the 2D domain, recent advances have shown that similar problems can be significantly alleviated by employing knowledge from other modalities, such as language. Inspired by this, leveraging multimodal information for the 3D modality is promising for improving 3D understanding under a restricted data regime, but this line of research is not well studied. Therefore, we introduce ULIP to learn a unified representation of image, text, and 3D point cloud by pre-training with object triplets from the three modalities. To overcome the shortage of training triplets, ULIP leverages a pre-trained vision-language model that has already learned a common visual and textual space by training on massive image-text pairs. ULIP then learns a 3D representation space aligned with this common image-text space, using a small number of automatically synthesized triplets. ULIP is agnostic to the 3D backbone network and can easily be integrated into any 3D architecture. Experiments show that ULIP effectively improves the performance of multiple recent 3D backbones by simply pre-training them on ShapeNet55 using our framework, achieving state-of-the-art performance in both standard 3D classification and zero-shot 3D classification on ModelNet40 and ScanObjectNN. ULIP also improves the performance of PointMLP by around 3% in 3D classification on ScanObjectNN, and outperforms PointCLIP by 28.8% on top-1 accuracy for zero-shot 3D classification on ModelNet40. Our code and pre-trained models will be released.
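The alignment step described above is a contrastive objective that pulls matching pairs across modalities together. Below is a minimal NumPy sketch of such a symmetric InfoNCE-style loss between 3D point-cloud embeddings and frozen text embeddings; the function name, shapes, and temperature are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def contrastive_alignment_loss(pc_emb, text_emb, temperature=0.07):
    """Symmetric InfoNCE-style loss aligning 3D point-cloud embeddings with
    frozen text embeddings; matching pairs share the same row index."""
    # L2-normalize so dot products are cosine similarities.
    pc = pc_emb / np.linalg.norm(pc_emb, axis=1, keepdims=True)
    txt = text_emb / np.linalg.norm(text_emb, axis=1, keepdims=True)
    logits = pc @ txt.T / temperature        # (B, B) similarity matrix
    idx = np.arange(len(pc))                 # diagonal entries are matches

    def cross_entropy(l):
        l = l - l.max(axis=1, keepdims=True)  # numerical stability
        log_probs = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -log_probs[idx, idx].mean()

    # Average the 3D->text and text->3D directions.
    return 0.5 * (cross_entropy(logits) + cross_entropy(logits.T))
```

In the paper's setting, the text (and image) encoder would be a frozen pre-trained vision-language model, while gradients from this loss train only the 3D backbone.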
translated by Google Translate
We propose value retrieval with arbitrary queries for form-like documents to reduce the human effort of processing forms. Unlike previous methods that only address a fixed set of field items, our method predicts the target value for an arbitrary query based on an understanding of both the layout and the semantics of a form. To further boost model performance, we propose a Simple Document Language Modeling (SimpleDLM) strategy to improve document understanding during large-scale model pre-training. Experimental results show that our method significantly outperforms our baselines, and that SimpleDLM further improves our value-retrieval performance by about 17% F1 score compared with the state-of-the-art pre-training method. Code will be made publicly available.
In the context of online privacy, many methods propose complex privacy- and security-preserving measures to protect sensitive data. In this paper, we argue that not storing any sensitive data is the best form of security. Thus we propose an online framework, "Burn After Reading", in which each online sample is deleted immediately after it is processed. Meanwhile, we tackle the inevitable distribution shift between the labeled public data and the unlabeled private data as a problem of unsupervised domain adaptation. Specifically, we propose a novel algorithm that targets the most fundamental challenge of the online adaptation setting: the lack of diverse source-target data pairs. We therefore design a cross-domain bootstrapping approach, named CroDoBo, to increase the combined diversity across domains. Further, to fully exploit the valuable discrepancies among the diverse combinations, we employ a training strategy of multiple learners with co-supervision. CroDoBo achieves state-of-the-art online performance on four domain adaptation benchmarks.
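The "process, then delete" discipline can be captured in a few lines. The following Python sketch is a minimal illustration of that online loop, with a running-mean update standing in for a real model update step; all names are hypothetical and this is not the CroDoBo algorithm itself.

```python
def burn_after_reading_stream(samples, update_fn, state):
    """Online loop in the spirit of 'Burn After Reading': each sample
    updates the model state exactly once and is then discarded, so no
    private sample is ever stored."""
    for sample in samples:        # `samples` may be any iterator/stream
        state = update_fn(state, sample)
        del sample                # drop the reference immediately
    return state

# Illustrative update: a running mean kept as (count, mean), a stand-in
# for an actual gradient step on the model.
def running_mean_update(state, x):
    count, mean = state
    count += 1
    return count, mean + (x - mean) / count
```

Because `samples` is consumed as a stream and each item is released after its update, peak memory stays constant in the number of private samples.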
In this work, we develop a generalizable and efficient Neural Radiance Field (NeRF) pipeline for high-fidelity free-viewpoint human body synthesis under settings with sparse camera views. Though existing NeRF-based methods can synthesize rather realistic details for the human body, they tend to produce poor results when the input has self-occlusion, especially for unseen humans under sparse views. Moreover, these methods often require a large number of sampling points for rendering, which leads to low efficiency and limits their real-world applicability. To address these challenges, we propose a Geometry-guided Progressive NeRF (GP-NeRF). In particular, to better tackle self-occlusion, we devise a geometry-guided multi-view feature integration approach that utilizes the estimated geometry prior to integrate the incomplete information from input views and to construct a complete geometry volume for the target human body. Meanwhile, to achieve higher rendering efficiency, we introduce a geometry-guided progressive rendering pipeline, which leverages the geometric feature volume and the predicted density values to progressively reduce the number of sampling points and speed up the rendering process. Experiments on the ZJU-MoCap and THUman datasets show that our method significantly outperforms the state-of-the-art across multiple generalization settings, while the time cost is reduced by more than 70% via our efficient progressive rendering pipeline.
Despite great progress in object detection, most existing methods are limited to a small set of object categories due to the tremendous human effort needed for instance-level bounding-box annotations. To alleviate this problem, recent open-vocabulary and zero-shot detection methods attempt to detect object categories not seen during training. However, these methods still rely on manually provided bounding-box annotations for a set of base classes. We propose an open-vocabulary detection framework that can be trained without manually provided bounding-box annotations. Our method achieves this by leveraging the localization ability of pre-trained vision-language models to generate pseudo bounding-box labels that can be directly used for training object detectors. Experimental results on COCO, PASCAL VOC, Objects365, and LVIS demonstrate the effectiveness of our method. Specifically, our method outperforms the state of the art (SOTA) trained with human-annotated bounding boxes by 3% AP on COCO novel categories, even though our training source is not equipped with manual bounding-box labels. When manual bounding-box labels are available, our method surpasses the SOTA by a large margin of 8% AP.
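One common way to turn a vision-language model's localization signal into a pseudo box is to threshold an activation map and take the bounding rectangle of the surviving region. The sketch below illustrates that final step only, in NumPy; it is a simplified stand-in, not the paper's pipeline, and assumes a 2D activation map is already available.

```python
import numpy as np

def pseudo_box_from_activation(act_map, threshold=0.5):
    """Derive a pseudo bounding-box label (x0, y0, x1, y1) from a 2D
    class-activation map by keeping cells above a fraction of the map's
    maximum and taking the tight box around them."""
    mask = act_map >= threshold * act_map.max()
    ys, xs = np.nonzero(mask)
    return int(xs.min()), int(ys.min()), int(xs.max()), int(ys.max())
```

Boxes produced this way are noisy, which is why such labels are treated as "pseudo" supervision rather than ground truth when training the detector.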
Rumors are rampant in the era of social media. Conversation structures provide valuable clues for differentiating between real and fake claims. However, existing rumor detection methods are either limited to the strict relational structure of user responses or oversimplify the conversation structure. In this study, to substantially reinforce the interaction of user opinions while alleviating the negative impact imposed by irrelevant posts, we first represent the conversation thread as an undirected interaction graph. We then propose a claim-guided hierarchical graph attention network for rumor classification, which enhances the representation learning of responsive posts by considering the entire social context and attends over the posts that can semantically infer the target claim. Extensive experiments on three Twitter datasets demonstrate that our rumor detection method achieves better performance than state-of-the-art methods, and exhibits a superior capacity for detecting rumors at early stages.
To reduce the significant redundancy in deep Convolutional Neural Networks (CNNs), most existing methods prune neurons by only considering statistics of an individual layer or two consecutive layers (e.g., prune one layer to minimize the reconstruction error of the next layer), ignoring the effect of error propagation in deep networks. In contrast, we argue that it is essential to prune neurons in the entire network jointly based on a unified goal: minimizing the reconstruction error of important responses in the "final response layer" (FRL), which is the second-to-last layer before classification, for a pruned network to retain its predictive power. Specifically, we apply feature ranking techniques to measure the importance of each neuron in the FRL, formulate network pruning as a binary integer optimization problem, and derive a closed-form solution to it for pruning neurons in earlier layers. Based on our theoretical analysis, we propose the Neuron Importance Score Propagation (NISP) algorithm to propagate the importance scores of final responses to every neuron in the network. The CNN is pruned by removing the neurons with least importance, and then fine-tuned to retain its predictive power. NISP is evaluated on several datasets with multiple CNN models and demonstrated to achieve significant acceleration and compression with negligible accuracy loss.
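The core propagation idea, scoring a neuron by how strongly it feeds into already-important neurons, can be sketched in a few lines. The NumPy code below is a minimal illustration under the assumption of plain fully-connected layers with weight matrices of shape (fan_out, fan_in); it is not the paper's full algorithm, which also handles convolutions and pooling.

```python
import numpy as np

def propagate_importance(weights, final_scores):
    """Propagate importance scores from the final response layer back
    through the network: a neuron's score is the absolute-weight-weighted
    sum of the scores of the neurons it feeds (cf. NISP)."""
    scores = [np.asarray(final_scores, dtype=float)]
    # weights[k] maps layer-k activations to layer-(k+1) activations,
    # with shape (fan_out, fan_in); walk the layers back to front.
    for W in reversed(weights):
        scores.append(np.abs(W).T @ scores[-1])
    scores.reverse()
    return scores  # scores[k] holds importance scores for layer k
```

Pruning would then remove, in each layer, the neurons with the smallest propagated scores before fine-tuning.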
The availability of challenging benchmarks has played a key role in the recent progress of machine learning. In cooperative multi-agent reinforcement learning, the StarCraft Multi-Agent Challenge (SMAC) has become a popular testbed for centralised training with decentralised execution. However, after years of sustained improvement on SMAC, algorithms now achieve near-perfect performance. In this work, we conduct new analysis demonstrating that SMAC is not sufficiently stochastic to require complex closed-loop policies. In particular, we show that an open-loop policy conditioned only on the timestep can achieve non-trivial win rates for many SMAC scenarios. To address this limitation, we introduce SMACv2, a new version of the benchmark where scenarios are procedurally generated and require agents to generalise to previously unseen settings (from the same distribution) during evaluation. We show that these changes ensure the benchmark requires the use of closed-loop policies. We evaluate state-of-the-art algorithms on SMACv2 and show that it presents significant challenges not present in the original benchmark. Our analysis illustrates that SMACv2 addresses the discovered deficiencies of SMAC and can help benchmark the next generation of MARL methods. Videos of training are available at https://sites.google.com/view/smacv2
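The "open-loop policy conditioned only on the timestep" used in the analysis is simply a fixed action schedule that ignores observations entirely. A minimal Python sketch (names and actions are illustrative, not SMAC's actual action space):

```python
def make_open_loop_policy(action_sequence):
    """An open-loop policy conditioned only on the timestep: it returns
    the scheduled action for step t and ignores the observation, which is
    exactly the kind of policy the SMAC analysis shows can already win."""
    def policy(observation, t):
        # `observation` is deliberately unused.
        return action_sequence[t % len(action_sequence)]
    return policy
```

If such a policy achieves non-trivial win rates, the environment is not stochastic enough to force genuinely closed-loop behavior, which is the deficiency SMACv2's procedural generation is designed to remove.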
Text-VQA aims at answering questions that require understanding the textual cues in an image. Despite great progress made by existing Text-VQA methods, their performance suffers from insufficient human-labeled question-answer (QA) pairs. However, we observe that the scene text is generally not fully exploited in existing datasets: only a small portion of the text in each image participates in the annotated QA activities. This results in a huge waste of useful information. To address this deficiency, we develop a new method to generate high-quality and diverse QA pairs by explicitly utilizing the existing rich text available in the scene context of each image. Specifically, we propose TAG, a text-aware visual question-answer generation architecture that learns to produce meaningful and accurate QA samples using a multimodal transformer. The architecture exploits underexplored scene text information and enhances the scene understanding of Text-VQA models by combining the generated QA pairs with the initial training data. Extensive experimental results on two well-known Text-VQA benchmarks (TextVQA and ST-VQA) demonstrate that our proposed TAG effectively enlarges the training data and helps improve Text-VQA performance without extra labeling effort. Moreover, our model outperforms state-of-the-art approaches that are pre-trained with extra large-scale data. Code will be made publicly available.
The task of action detection aims at inferring both the action category and the localization of the start and end moments of each action instance simultaneously. Although Vision Transformers have driven the recent advances in video understanding, it is non-trivial to design an efficient architecture for action detection due to the prohibitive cost of self-attention over long video clips. To this end, we present an efficient hierarchical Spatio-Temporal Pyramid Transformer (STPT) for action detection, building upon the fact that the early self-attention layers in Transformers still focus on local patterns. Specifically, we propose to use local window attention to encode rich local spatio-temporal representations in the early stages, while applying global attention modules to capture long-term space-time dependencies in the later stages. In this way, our STPT can encode both locality and dependency with largely reduced redundancy, delivering a promising trade-off between accuracy and efficiency. For example, with only RGB input, the proposed STPT achieves 53.6% mAP on THUMOS14, surpassing the I3D+AFSD RGB model by over 10%, and performs favorably against state-of-the-art methods that use additional optical-flow features while requiring 31% fewer GFLOPs, serving as an effective and efficient end-to-end Transformer-based framework for action detection.